Melanoma classification from dermoscopic images with deep learning has recently shown great potential for automated early melanoma diagnosis. However, limited by severe data imbalance and obvious extraneous artifacts, namely hair and ruler markings, discriminative feature extraction from dermoscopic images is very challenging. In this study, we seek to address these problems separately to better represent lesion features. Specifically, a GAN-based data augmentation (GDA) strategy is used to generate synthetic melanoma-positive images, in conjunction with the proposed implicit hair denoising (IHD) strategy, wherein hair-related representations are implicitly disentangled via an auxiliary classifier network and reversely sent to the melanoma-feature extraction backbone to deliver better melanoma-specific representation learning. Furthermore, to train the IHD module, hair noise is additionally labeled on the ISIC2020 dataset, making it the first large-scale dermoscopic dataset with annotations of hair-like artifacts. Extensive experiments demonstrate the superiority of the proposed framework as well as the effectiveness of each component. The improved dataset is publicly available at https://github.com/kirtsy/dermoscopicdataset.
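The hair representation being "reversely sent" to the backbone suggests a gradient-reversal mechanism: the backbone is trained so the auxiliary hair classifier cannot recover hair features. A minimal numpy sketch of a gradient reversal layer, as one plausible realization (the class name and the scaling factor `lam` are illustrative assumptions, not the paper's code):

```python
import numpy as np

class GradReverse:
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so the feature extractor learns to *confuse* the
    auxiliary hair classifier attached behind it."""

    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_from_aux):
        return -self.lam * grad_from_aux  # reversed gradient to the backbone

grl = GradReverse(lam=0.5)
feats = np.array([1.0, -2.0, 3.0])
assert np.allclose(grl.forward(feats), feats)
rev = grl.backward(np.array([0.2, -0.4, 0.6]))  # reversed and scaled
```

During training, the auxiliary head minimizes its hair-classification loss as usual, while the reversed gradient pushes the shared backbone toward hair-invariant lesion features.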
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware. However, supervised training of SNNs remains a hard problem due to the discontinuity of the spiking neuron model. Most existing methods imitate the backpropagation framework and feedforward architectures of artificial neural networks, and use surrogate derivatives or compute gradients with respect to spiking times to deal with the problem. These approaches either accumulate approximation errors or only propagate information limitedly through existing spikes, and usually require information propagation along time steps with large memory costs and biological implausibility. In this work, we consider feedback spiking neural networks, which are more brain-like, and propose a novel training method that does not rely on the exact reverse of the forward computation. First, we show that the average firing rates of SNNs with feedback connections gradually evolve to an equilibrium state over time, which follows a fixed-point equation. Then, by viewing the forward computation of the feedback SNN as a black-box solver of this equation and leveraging implicit differentiation on the equation, we can compute the gradients of the parameters without considering the exact forward procedure. In this way, the forward and backward procedures are decoupled, and the problem of the non-differentiable spiking function is avoided. We also briefly discuss the biological plausibility of implicit differentiation, which only requires computing another equilibrium. Extensive experiments on MNIST, Fashion-MNIST, N-MNIST, CIFAR-10, and CIFAR-100 demonstrate the superior performance of our method for feedback models with fewer neurons and parameters in a small number of time steps. Our code is available at https://github.com/pkuxmq/IDE-FSNN.
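The key trick above, differentiating through an equilibrium without unrolling the solver, can be illustrated on a scalar fixed point a = tanh(w·a + x). This is a generic implicit-differentiation sketch under that toy dynamics, not the paper's IDE-FSNN implementation:

```python
import numpy as np

def solve_equilibrium(w, x, iters=200):
    """Treat the forward pass as a black-box fixed-point solver of
    a = tanh(w*a + x)."""
    a = 0.0
    for _ in range(iters):
        a = np.tanh(w * a + x)
    return a

def implicit_grad_w(w, x):
    """da*/dw from the implicit function theorem: with
    g(a, w) = tanh(w*a + x) - a = 0,
    da/dw = -(dg/dw)/(dg/da) = (s*a) / (1 - s*w), where s = 1 - tanh^2."""
    a = solve_equilibrium(w, x)
    s = 1.0 - np.tanh(w * a + x) ** 2  # derivative of tanh at equilibrium
    return (s * a) / (1.0 - s * w)

# Sanity check against finite differences of the solver itself.
w, x, eps = 0.5, 0.3, 1e-6
fd = (solve_equilibrium(w + eps, x) - solve_equilibrium(w - eps, x)) / (2 * eps)
diff = abs(implicit_grad_w(w, x) - fd)
```

Note that the gradient depends only on the equilibrium itself, not on how the solver reached it, which is exactly what decouples the forward and backward procedures.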
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. Particularly, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to achieve high performance in a very small number of time steps. However, it is at the cost of large memory consumption for training, lack of theoretical clarity for optimization, and inconsistency with the online property of biological learning and rules on neuromorphic hardware. Other works connect spike representations of SNNs with equivalent artificial neural network formulation and train SNNs by gradients from equivalent mappings to ensure descent directions. But they fail to achieve low latency and are also not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous loss and gradients. Meanwhile, we theoretically analyze and prove that gradients of OTTT can provide a similar descent direction for optimization as gradients based on spike representations under both feedforward and recurrent conditions. OTTT only requires constant training memory costs agnostic to time steps, avoiding the significant memory costs of BPTT for GPU training. Furthermore, the update rule of OTTT is in the form of three-factor Hebbian learning, which could pave a path for online on-chip learning. With OTTT, it is the first time that two mainstream supervised SNN training methods, BPTT with SG and spike representation-based training, are connected, and meanwhile in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in small time steps.
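"Tracking presynaptic activities" with instantaneous loss and gradients is typically realized as a running eligibility trace combined with a per-step error signal. A minimal numpy sketch of such an online, forward-in-time update (the decay constant `lam` and the scalar error signal are illustrative assumptions, not the OTTT paper's exact rule):

```python
import numpy as np

def ottt_style_update(w, spikes_pre, errors_post, lr=0.1, lam=0.9):
    """Online update: maintain a presynaptic trace a[t] = lam*a[t-1] + s[t]
    and apply an instantaneous gradient err[t] * a[t] at every step, so no
    backpropagation through time (and no stored spike history) is needed.
    Memory cost is constant in the number of time steps."""
    trace = 0.0
    for s, err in zip(spikes_pre, errors_post):
        trace = lam * trace + s  # presynaptic eligibility trace
        w -= lr * err * trace    # instantaneous three-factor-style update
    return w

w_final = ottt_style_update(0.5, spikes_pre=[1, 0, 1],
                            errors_post=[0.2, -0.1, 0.3])
```

The three factors of the update (presynaptic trace, postsynaptic error, learning rate) mirror the three-factor Hebbian form mentioned in the abstract.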
Almost none of the 2,000+ languages spoken in Africa have widely available automatic speech recognition systems, and the required data is also only available for a few languages. We have experimented with two techniques which may provide pathways to large-vocabulary speech recognition for African languages: multilingual modeling and self-supervised learning. We gathered available open-source data and collected data for 15 languages, and trained experimental models using these techniques. Our results show that pooling the small amounts of data available in multilingual end-to-end models, and pre-training on unsupervised data, can help improve speech recognition quality for many African languages.
Personalization of speech models on mobile devices (on-device personalization) is an active area of research, but typically mobile devices have more text-only data than paired audio-text data. We explore training a personalized language model on text-only data, used during inference to improve speech recognition performance for that user. We experiment on a user-clustered LibriSpeech corpus, supplemented with personalized text-only data for each user from Project Gutenberg. We release this User-Specific LibriSpeech (UserLibri) dataset to aid future personalization research. LibriSpeech audio-transcript pairs are grouped into 55 users from the test-clean dataset and 52 users from test-other. We are able to lower the average word error rate per user across both sets in streaming and non-streaming models, including an improvement of 2.5 for the harder set of test-other users when streaming.
Most studies in cross-device federated learning focus on small models, due to the server-client communication and on-device computation bottlenecks. In this work, we leverage various techniques for mitigating these bottlenecks to train larger language models in cross-device federated learning. With systematic applications of partial model training, quantization, efficient transfer learning, and communication-efficient optimizers, we are able to train a 21M-parameter Transformer and a 20.2M-parameter Conformer that achieve the same or better perplexity as that of a similarly sized LSTM with a $\sim10\times$ smaller client-to-server communication cost, and 11% lower perplexity than smaller LSTMs commonly studied in the literature.
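Of the listed techniques, quantization of client-to-server updates is the easiest to sketch: each float tensor is mapped to low-bit integers plus a scale and offset, cutting the uploaded payload roughly 4× at 8 bits. A generic uniform-quantization sketch (not the paper's exact scheme):

```python
import numpy as np

def quantize_update(delta, num_bits=8):
    """Uniformly quantize a model update to num_bits integers plus a
    (scale, offset) pair, shrinking the client-to-server payload."""
    levels = 2 ** num_bits - 1
    lo, hi = delta.min(), delta.max()
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((delta - lo) / scale).astype(np.uint8)
    return q, lo, scale

def dequantize_update(q, lo, scale):
    """Server-side reconstruction of the approximate update."""
    return q.astype(np.float64) * scale + lo

rng = np.random.default_rng(0)
delta = rng.normal(size=1000)                      # a fake model update
q, lo, scale = quantize_update(delta)
err = np.max(np.abs(dequantize_update(q, lo, scale) - delta))
```

Rounding to the nearest level bounds the per-coordinate reconstruction error by half a quantization step, which is what keeps the accuracy cost small.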
Federated learning is used for the decentralized training of machine learning models on a large number (millions) of edge mobile devices. It is challenging because mobile devices often have limited communication bandwidth and local computation resources. Therefore, improving the efficiency of federated learning is critical for scalability and usability. In this paper, we propose to leverage partially trainable neural networks, which freeze a portion of the model parameters during the entire training process, to reduce the communication cost with little implication on model performance. Through extensive experiments, we empirically show that federated learning of partially trainable neural networks (FedPT) can result in superior communication-accuracy trade-offs, with up to a $46\times$ reduction in communication cost, at a small accuracy cost. Our approach also enables faster training, with a smaller memory footprint and better utility for strong differential privacy guarantees. The proposed FedPT method can be particularly interesting for pushing the limit of overparameterization in on-device learning.
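The core idea, freezing a fixed subset of parameters so that only the trainable slice is ever communicated, can be sketched in a few lines. A generic sketch (the mask choice and plain-SGD update are illustrative, not the FedPT recipe):

```python
import numpy as np

def client_round(params, trainable_mask, grads, lr=0.1):
    """One local update: only the trainable slice changes, so only
    params[trainable_mask] needs to be uploaded to the server; the frozen
    portion is identical on every client and never transmitted."""
    new_params = params.copy()
    new_params[trainable_mask] -= lr * grads[trainable_mask]
    upload = new_params[trainable_mask]  # communicated payload
    return new_params, upload

params = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([True, False, True, False])  # half the model frozen
grads = np.array([0.5, 0.5, 0.5, 0.5])
params, upload = client_round(params, mask, grads)
```

Here the frozen entries are untouched and the payload is half the model size; the communication saving scales directly with the frozen fraction.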
Truecasing is the task of restoring the correct case (uppercase or lowercase) of noisy text generated either by automatic systems for speech recognition or machine translation, or by humans. It improves the performance of downstream NLP tasks such as named entity recognition and language modeling. We propose a fast, accurate, and compact two-level hierarchical word-and-character-based recurrent neural network model, the first of its kind for this problem. Using sequence distillation, we also address the problem of truecasing while ignoring token positions in the sentence, i.e., in a position-invariant manner.
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Knowledge graphs (KG) have served as the key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works in KG completion and CKG completion suffer from long-tail relations and newly-added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.